Analysis of the Memory Registration Process in the Mellanox InfiniBand Software Stack
Authors
Abstract
To leverage high-speed interconnects such as InfiniBand, it is important to minimize communication overhead. The most significant source of overhead is the registration of communication memory. In this paper, we present our analysis of the memory registration process inside the Mellanox InfiniBand driver and possible ways out of this bottleneck. We evaluate and characterize the most time-consuming parts in the execution path of the memory registration function using the Read Time Stamp Counter (RDTSC) instruction. We present measurements on AMD Opteron and Intel Xeon systems with different types of Host Channel Adapters for PCI-X and PCI Express. Finally, we conclude with first results using Linux hugepage support to shorten the time needed to register a memory region.
Related papers
A General-Purpose API for iWARP and InfiniBand
Remote Direct Memory Access (RDMA) allows data to be transferred over a network directly from the memory of one computer to the memory of another computer without CPU intervention. There are two major types of RDMA hardware on the market today: InfiniBand, and RDMA over IP, also known as iWARP. This hardware is supported by open software that was developed by the OpenFabrics Alliance (OFA) and ...
Can NIC Memory in InfiniBand Benefit Communication Performance? — A Study with Mellanox Adapter
This paper presents a comprehensive micro-benchmark performance evaluation of using NIC memory in the Mellanox InfiniBand adapter. Three main benefits have been explored, including non-blocking and high-performance host/NIC data movement, traffic reduction on the local interconnect, and avoidance of the local interconnect bottleneck. Two case studies have been carried out to show how these bene...
An MPICH2 Channel Device Implementation over VAPI on InfiniBand
MPICH2, the successor of one of the most popular open source message passing implementations, aims to fully support the MPI-2 standard. Due to a complete redesign, MPICH2 is also cleaner, more flexible, and faster. The InfiniBand network technology is an open industry standard and provides high bandwidth and low latency, as well as reliability, availability, serviceability (RAS) features. It is...
Performance of Mellanox ConnectX Adapter on Multi-core Architectures Using InfiniBand
Introduction SAND2005-1555P Unlimited Release
The InfiniBand (IB) technology development community (e.g., Mellanox, Voltaire, Topspin, InfiniCon) is deploying its next generation of NICs and switches [1, 2, 3, 4, 5]. The NICs are based on the PCI Express bus, and the switches employ a 24-port switch chip developed by Mellanox. This new switch chip has led to the development of high port count (e.g., 144- and 288-port) standalone switches...